


Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks

Neural Information Processing Systems

Adversarial training is a popular defense strategy against attack threat models with bounded Lp norms. However, it often degrades the model performance on normal images and more importantly, the defense does not generalize well to novel attacks. Given the success of deep generative models such as GANs and VAEs in characterizing the underlying manifold of images, we investigate whether or not the aforementioned deficiencies of adversarial training can be remedied by exploiting the underlying manifold information. To partially answer this question, we consider the scenario when the manifold information of the underlying data is available. We use a subset of ImageNet natural images where an approximate underlying manifold is learned using StyleGAN.
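The core on-manifold idea can be sketched briefly: instead of perturbing pixels, perturb the latent code of a pretrained generator so the adversarial image stays on the learned image manifold. Below is a minimal illustrative sketch, assuming a pretrained generator G mapping latent codes to images and a classifier f; the function name, step sizes, and budget are placeholder assumptions, not the paper's exact settings.

import torch
import torch.nn.functional as F

def on_manifold_pgd(G, f, w, label, steps=50, eps=0.02, alpha=0.005):
    # Perturb the latent code w (not the pixels) so that the generated
    # image G(w + delta) fools the classifier f, keeping delta in an
    # L-inf ball of radius eps so the image stays near the manifold point.
    w0 = w.detach()
    delta = torch.zeros_like(w0, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(f(G(w0 + delta)), label)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascend the loss
            delta.clamp_(-eps, eps)             # project back into the budget
        delta.grad.zero_()
    return G(w0 + delta).detach()               # on-manifold adversarial image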


Dual Manifold Adversarial Robustness: Defense against Lp and non-Lp Adversarial Attacks (Appendix A: OM-ImageNet Details, A.1 Overview)

Neural Information Processing Systems

[Figure 1: Visual comparison between original images and projected images.] All the classification models are trained using two P6000 GPUs with a batch size of 64 for 20 epochs. We study how different choices affect the robustness of the trained networks against unseen attacks. [Table 4: Classification accuracy against unseen attacks applied to the OM-ImageNet test set.] [Table 5: Classification accuracy against known (PGD-50 and OM-PGD-50) and unseen attacks; brighter colors indicate larger absolute differences.]
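For context, robust accuracy against an attack such as PGD-50 is typically measured by running the attack on each test batch and scoring the model on the perturbed inputs. The sketch below is a generic image-space PGD evaluation loop with illustrative epsilon and step-size values; it is an assumption about the standard protocol, not the paper's exact evaluation code.

import torch
import torch.nn.functional as F

def pgd_attack(f, x, y, steps=50, eps=8/255, alpha=2/255):
    # Standard L-inf PGD: iterate signed-gradient steps, projecting back
    # into the eps-ball around the clean image and the valid pixel range.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(f(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        step = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (step - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def robust_accuracy(f, loader):
    correct = total = 0
    for x, y in loader:
        preds = f(pgd_attack(f, x, y)).argmax(dim=1)
        correct += (preds == y).sum().item()
        total += y.numel()
    return correct / total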



In this work, we consider the scenario when the manifold information is exact and show that this information can be very useful for improving robustness to novel attacks.

Neural Information Processing Systems

How can DMAT be exploited for standard tasks/datasets? PGD should not be viewed as the strongest attack for evaluation; results against stronger attacks are presented in the last column of Table B. Other strong baselines such as TRADES should be included in the main paper. The notion of "manifold" should be clarified; we will explain this further in the paper.




Learning Low-dimensional Manifolds for Scoring of Tissue Microarray Images

Yan, Donghui, Zou, Jian, Li, Zhenpeng

arXiv.org Artificial Intelligence

Tissue microarray (TMA) images have emerged as an important high-throughput tool for cancer study and the validation of biomarkers. Efforts have been dedicated to further improving the accuracy of TACOMA, a cutting-edge automatic scoring algorithm for TMA images. One major advance is due to deepTacoma, an algorithm that incorporates suitable deep representations of a group nature. Inspired by recent advances in semi-supervised learning and deep learning, we propose mfTacoma to learn alternative deep representations in the context of TMA image scoring. In particular, mfTacoma learns low-dimensional manifolds, a common latent structure in high-dimensional data. Deep representation learning and manifold learning typically require large data. By encoding deep representations of the manifolds as regularizing features, mfTacoma effectively leverages manifold information that is potentially crude due to small data. Our experiments show that deep features by manifolds outperform two alternatives: deep features by linear manifolds with principal component analysis, or features leveraging the group property.
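A minimal sketch of the general recipe, using stand-in components: learn low-dimensional manifold coordinates of the images and append them to the raw features as extra, regularizing inputs to the classifier. The embedders and classifier below (Isomap/PCA and a random forest) are illustrative assumptions, not mfTacoma's actual components.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap
from sklearn.ensemble import RandomForestClassifier

def augment_with_manifold(X, n_components=10, linear=False):
    # Learn low-dimensional manifold coordinates and append them to the
    # raw features as extra "regularizing" inputs for the classifier.
    # (In a real pipeline, fit the embedder on training data only.)
    embedder = PCA(n_components=n_components) if linear \
        else Isomap(n_components=n_components)
    Z = embedder.fit_transform(X)
    return np.hstack([X, Z])

# Illustrative usage: X has one row per TMA image, y holds the scores.
# clf = RandomForestClassifier().fit(augment_with_manifold(X), y)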


Local Centroids Structured Non-Negative Matrix Factorization

Gao, Hongchang (University of Texas at Arlington) | Nie, Feiping (University of Texas at Arlington) | Huang, Heng (University of Texas at Arlington)

AAAI Conferences

Non-negative Matrix Factorization (NMF) has attracted much attention and been widely used in real-world applications. As a clustering method, however, it fails to handle the case where data points lie in a complicated geometric structure. Existing methods adopt a single global centroid for each cluster, failing to capture the manifold structure. In this paper, we propose a novel local-centroids-structured NMF to address this drawback. Instead of using a single centroid per cluster, we introduce multiple local centroids for each cluster so that the manifold structure can be captured by the local centroids. This novel NMF method can improve clustering performance effectively. Furthermore, a novel bipartite graph is incorporated to obtain the clustering indicator directly, without any post-processing. Experiments on both toy and real-world datasets verify the effectiveness of the proposed method.
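A much-simplified sketch of the local-centroid idea: factorize the data with more components than clusters, treat each NMF component as a local centroid, group the centroids into clusters, and let each sample inherit the cluster of its strongest centroid. The grouping step below uses k-means as a stand-in for the paper's bipartite-graph formulation, so this is an assumption-laden illustration, not the proposed algorithm.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

def local_centroid_clusters(X, n_clusters=3, centroids_per_cluster=4):
    # Factorize non-negative X with several components per cluster, so
    # each cluster is represented by multiple local centroids.
    m = n_clusters * centroids_per_cluster
    nmf = NMF(n_components=m, init="nndsvda", max_iter=500)
    H = nmf.fit_transform(X)            # sample-to-local-centroid weights
    centroids = nmf.components_         # local centroids in feature space
    # Group local centroids into clusters (stand-in for the paper's
    # bipartite-graph step), then assign each sample the cluster of its
    # strongest local centroid.
    groups = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(centroids)
    return groups[H.argmax(axis=1)]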


Semi-Supervised Multitask Learning

Liu, Qiuhua, Liao, Xuejun, Carin, Lawrence

Neural Information Processing Systems

A semi-supervised multitask learning (MTL) framework is presented, in which M parameterized semi-supervised classifiers, each associated with one of M partially labeled data manifolds, are learned jointly under the constraint of a soft-sharing prior imposed over the parameters of the classifiers. The unlabeled data are utilized by basing classifier learning on neighborhoods, induced by a Markov random walk over a graph representation of each manifold. Experimental results on real data sets demonstrate that semi-supervised MTL yields significant improvements in generalization performance over either semi-supervised single-task learning (STL) or supervised MTL.
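The neighborhood construction can be sketched concretely: pairwise affinities define the one-step transition probabilities of a Markov random walk on the data graph, and a t-step walk yields the neighborhood distributions on which classifier learning is based. The Gaussian kernel width and walk length below are illustrative choices, not the paper's settings.

import numpy as np

def random_walk_neighborhoods(X, sigma=1.0, t=3):
    # Gaussian affinities between points define one-step transition
    # probabilities; a t-step walk spreads influence along the manifold.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    A = np.exp(-d2 / (2 * sigma ** 2))
    P = A / A.sum(axis=1, keepdims=True)  # row-stochastic transitions
    return np.linalg.matrix_power(P, t)   # [i, j]: prob. of j after t steps from i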

